Hug and hop: a discrete-time, nonreversible Markov chain Monte Carlo algorithm
Authors: Matthew Ludkin, Chris Sherlock
Abstract
This article introduces the hug and hop Markov chain Monte Carlo algorithm for estimating expectations with respect to an intractable distribution. The algorithm alternates between two kernels, referred to as hug and hop. Hug is a nonreversible kernel that repeatedly applies the bounce mechanism from the recently proposed bouncy particle sampler to produce a proposal point that is far from the current position, yet on almost the same contour of the target density, leading to a high acceptance probability. Hug is complemented by hop, which deliberately proposes jumps between contours and has an efficiency that degrades very slowly with increasing dimension. There are many parallels between hug and Hamiltonian Monte Carlo using a leapfrog integrator, including the order of the integration scheme, but hug is also able to make use of local Hessian information without requiring implicit numerical integration steps, and its performance is not terminally affected by unbounded gradients of the log-posterior. We test hug and hop empirically on a variety of toy targets and real statistical models, and find that it can, and often does, outperform Hamiltonian Monte Carlo.
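To make the hug mechanism concrete, the sketch below shows one hug update in Python. It is a minimal illustration under stated assumptions, not the authors' implementation: the callables `log_pi` and `grad_log_pi`, the step size `delta` and the bounce count `n_bounces` are placeholders chosen for the example, and both the Hessian-preconditioned variant and the hop kernel are omitted.

```python
import numpy as np

def hug_step(x, log_pi, grad_log_pi, delta=0.1, n_bounces=10, rng=None):
    """One hug update: repeated half-step / bounce / half-step moves,
    followed by a Metropolis accept/reject.  Bare-bones sketch only."""
    rng = np.random.default_rng() if rng is None else rng
    v = rng.standard_normal(x.shape)            # fresh Gaussian velocity
    x_new, v_new = np.array(x, dtype=float), v
    for _ in range(n_bounces):
        x_new = x_new + 0.5 * delta * v_new     # move half a step
        g = grad_log_pi(x_new)
        g_hat = g / np.linalg.norm(g)
        v_new = v_new - 2.0 * (v_new @ g_hat) * g_hat  # bounce: reflect velocity in the gradient
        x_new = x_new + 0.5 * delta * v_new     # move the remaining half step
    # Reflection preserves ||v||, so the Gaussian velocity factors cancel and
    # the acceptance ratio depends only on the target density.
    if np.log(rng.uniform()) < log_pi(x_new) - log_pi(x):
        return x_new
    return x

# Illustrative usage: hug steps targeting a standard bivariate normal.
log_pi = lambda x: -0.5 * np.sum(x**2)
grad_log_pi = lambda x: -x
x = np.ones(2)
for _ in range(100):
    x = hug_step(x, log_pi, grad_log_pi)
```

Because each bounce reflects the velocity in the local gradient, consecutive moves stay close to a contour of the log-density, which is why the proposal can travel far while retaining a high acceptance probability.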
Similar resources
Markov chain Monte Carlo for continuous-time discrete-state systems
A variety of phenomena are best described using dynamical models which operate on a discrete state space and in continuous time. Examples include Markov (and semi-Markov) jump processes, continuous-time Bayesian networks, renewal processes and other point processes. These continuous-time, discrete-state models are ideal building blocks for Bayesian models in fields such as systems biology, genet...
Markov Chain Monte Carlo
Markov chain Monte Carlo is an umbrella term for algorithms that use Markov chains to sample from a given probability distribution. This paper is a brief examination of Markov chain Monte Carlo and its usage. We begin by discussing Markov chains and the ergodicity, convergence, and reversibility thereof before proceeding to a short overview of Markov chain Monte Carlo and the use of mixing time...
Markov Chain Monte Carlo
This paper gives a brief introduction to Markov Chain Monte Carlo methods, which offer a general framework for calculating difficult integrals. We start with the basic theory of Markov chains and build up to a theorem that characterizes convergent chains. We then discuss the Metropolis-Hastings algorithm.
Markov chain Monte Carlo
One of the simplest and most powerful practical uses of the ergodic theory of Markov chains is in Markov chain Monte Carlo (MCMC). Suppose we wish to simulate from a probability density π (which will be called the target density) but that direct simulation is either impossible or practically infeasible (possibly due to the high dimensionality of π). This generic problem occurs in diverse scient...
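As a concrete illustration of this generic problem, the sketch below is a minimal random-walk Metropolis sampler in Python. It is an illustrative sketch only, with an arbitrary Gaussian proposal scale and a toy standard-normal target, and is not taken from any of the papers listed above.

```python
import numpy as np

def rw_metropolis(log_pi, x0, n_samples=5000, scale=0.5, rng=None):
    """Random-walk Metropolis: propose a Gaussian perturbation of the current
    state and accept with probability min(1, pi(proposal) / pi(current))."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    samples = np.empty((n_samples,) + x.shape)
    for i in range(n_samples):
        proposal = x + scale * rng.standard_normal(x.shape)
        if np.log(rng.uniform()) < log_pi(proposal) - log_pi(x):
            x = proposal                        # accept the move
        samples[i] = x                          # on rejection, keep the current state
    return samples

# Illustrative target: log pi(x) = -x^2/2 up to a constant, i.e. a standard normal.
draws = rw_metropolis(lambda x: -0.5 * np.sum(x**2), x0=np.zeros(1))
```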
Journal
Title: Biometrika
Year: 2022
ISSN: 0006-3444, 1464-3510
DOI: https://doi.org/10.1093/biomet/asac039